Creativity is an indispensable part of human cognition and an inherent part of how we make sense of the world. Metaphorical abstraction is fundamental to communicating creative ideas through nuanced relationships between abstract concepts such as feelings. While computer vision benchmarks and approaches predominantly focus on understanding and generating literal interpretations of images, metaphorical comprehension of images remains relatively unexplored. Towards this goal, we introduce MetaCLUE, a set of vision tasks on visual metaphor. Since no existing datasets facilitate the evaluation of these tasks, we also collect high-quality, rich metaphor annotations (abstract objects, concepts, and relationships, along with their corresponding object boxes). We perform a comprehensive analysis of state-of-the-art vision and language models based on our annotations, highlighting the strengths and weaknesses of current approaches on visual metaphor Classification, Localization, Understanding (retrieval, question answering, captioning), and gEneration (text-to-image synthesis) tasks. We hope this work provides a concrete step towards developing AI systems with human-like creative capabilities.
Large-scale diffusion models have achieved state-of-the-art results on text-to-image (T2I) synthesis tasks. Despite their ability to generate high-quality, creative images, we observe that attribute binding and compositionality remain major challenges, especially when multiple objects are involved. In this work, we improve the compositional skills of T2I models, specifically targeting more accurate attribute binding and better image compositions. To do this, we incorporate linguistic structures into the diffusion guidance process, exploiting the controllable properties of the cross-attention layers in diffusion-based T2I models. We observe that the keys and values in cross-attention layers carry strong semantic meaning associated with object layout and content. We can therefore better preserve compositional semantics in the generated image by manipulating the cross-attention representations according to linguistic insights. Built upon Stable Diffusion, a SOTA T2I model, our structured cross-attention design is efficient and requires no additional training samples. We achieve better compositional skills in both qualitative and quantitative results, leading to a 5-8% advantage in head-to-head user comparison studies. Lastly, we conduct an in-depth analysis to reveal potential causes of incorrect image composition and to justify the properties of cross-attention layers in the generation process.
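To make the cross-attention manipulation concrete, here is a minimal sketch of how per-phrase keys and values could steer attention, assuming noun-phrase spans extracted from the prompt by a parser; the function and variable names are illustrative, not the paper's actual implementation.

```python
# Hypothetical sketch: steering cross-attention with per-phrase text embeddings.
# Names (structured_cross_attention, phrase_spans) are illustrative, not the paper's API.
import torch

def structured_cross_attention(queries, text_emb, phrase_spans, w_k, w_v):
    """Cross-attention where keys/values for each noun-phrase span are built from
    that phrase's own token embeddings, so attribute tokens stay bound to their
    object tokens instead of mixing across the whole prompt.

    queries:      (batch, n_img_tokens, d)   image-token queries
    text_emb:     (batch, n_txt_tokens, d)   encoder output for the full prompt
    phrase_spans: list of (start, end) token index pairs, one per noun phrase
    w_k, w_v:     (d, d) projection matrices for keys and values
    """
    outputs = []
    for start, end in phrase_spans:
        phrase = text_emb[:, start:end, :]           # tokens of one noun phrase
        k = phrase @ w_k                              # phrase-local keys
        v = phrase @ w_v                              # phrase-local values
        attn = torch.softmax(queries @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        outputs.append(attn @ v)
    # Average the per-phrase attention outputs (one simple way to combine them).
    return torch.stack(outputs, dim=0).mean(dim=0)

# Toy usage with random tensors in place of a real diffusion U-Net and text encoder.
q = torch.randn(1, 64, 32)
txt = torch.randn(1, 10, 32)
wk, wv = torch.randn(32, 32), torch.randn(32, 32)
print(structured_cross_attention(q, txt, [(0, 4), (4, 10)], wk, wv).shape)  # (1, 64, 32)
```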
Prompt tuning is a new few-shot transfer learning technique that tunes only a learnable prompt for pre-trained vision-and-language models such as CLIP. However, existing prompt tuning methods tend to learn spurious or entangled representations, which leads to poor generalization to unseen concepts. Towards non-spurious and efficient prompt learning from limited examples, this paper presents a novel Counterfactual Prompt Learning (CPL) method for vision and language models, which simultaneously employs counterfactual generation and contrastive learning in a joint optimization framework. In particular, CPL constructs counterfactuals by identifying the minimal non-spurious feature change between semantically similar positive and negative samples that causes a concept change, and learns more generalizable prompt representations from both factual and counterfactual examples via contrastive learning. Extensive experiments demonstrate that CPL obtains superior few-shot performance on different vision and language tasks compared to previous prompt tuning methods on CLIP. On image classification, we achieve a 3.55% average relative improvement on unseen classes across seven datasets; on image-text retrieval and visual question answering, we gain up to 4.09% and 25.08% relative improvement, respectively, across three few-shot scenarios on unseen test sets.
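A minimal sketch of the contrastive objective described above, assuming the counterfactual is built by blending a small amount of a semantically similar negative feature into the positive one; in CPL the mixing weight is itself optimized to be minimal, whereas here it is a fixed hyperparameter for simplicity, and all names are illustrative rather than the authors' code.

```python
# Illustrative sketch (not the authors' code): build a counterfactual feature by
# minimally blending a semantically similar negative into the positive example,
# then contrast the text feature against the factual vs. counterfactual image features.
import torch
import torch.nn.functional as F

def counterfactual(pos_feat, neg_feat, alpha):
    # Minimal non-spurious change: mix just enough of the negative to flip the concept.
    return (1 - alpha) * pos_feat + alpha * neg_feat

def contrastive_loss(text_feat, pos_feat, neg_feat, alpha=0.1, tau=0.07):
    cf_feat = counterfactual(pos_feat, neg_feat, alpha)
    feats = torch.stack([pos_feat, cf_feat])                     # (2, d)
    sims = F.cosine_similarity(text_feat.unsqueeze(0), feats) / tau
    # The factual example (index 0) should score higher than the counterfactual.
    return F.cross_entropy(sims.unsqueeze(0), torch.tensor([0]))

# Toy usage with random 512-d features standing in for CLIP embeddings.
t, p, n = torch.randn(512), torch.randn(512), torch.randn(512)
print(contrastive_loss(t, p, n).item())
```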
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Several papers have rightly included minority groups in artificial intelligence (AI) training data to improve test inference for the minority group and/or society at large. Society at large consists of both minority and majority stakeholders. A common misconception is that minority inclusion alone does not improve performance for the majority group. In this paper, we show, surprisingly, that including minority samples can improve the test error of the majority group. In other words, minority inclusion leads to majority group enhancement (MIME) in performance. A theoretical existence proof of the MIME effect is given and found to be consistent with experimental results on six different datasets. Project webpage: https://visual.ee.ucla.edu/mime.htm/
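As a toy illustration of the comparison the MIME effect concerns, the sketch below trains a classifier on synthetic data with and without a small shifted minority group and reports the majority-group test error; this is not the paper's setup, and whether the error actually drops here depends on the data and seed.

```python
# Toy, hypothetical check of the MIME idea on synthetic data (not the paper's datasets):
# compare the majority group's test error when minority samples are excluded vs. included.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two classes per group, separated along the first feature, offset by a group shift.
    x0 = rng.normal([0 + shift, 0], 1.0, size=(n, 2))
    x1 = rng.normal([2 + shift, 0], 1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

Xmaj, ymaj = make_group(500, shift=0.0)     # majority group
Xmin, ymin = make_group(50, shift=0.5)      # minority group, slightly shifted
Xtest, ytest = make_group(1000, shift=0.0)  # majority-group test set

for name, (X, y) in {
    "majority only": (Xmaj, ymaj),
    "majority + minority": (np.vstack([Xmaj, Xmin]), np.concatenate([ymaj, ymin])),
}.items():
    err = 1 - LogisticRegression().fit(X, y).score(Xtest, ytest)
    print(f"{name}: majority test error = {err:.3f}")
```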
Effective human-agent teaming requires the ability to communicate the team's goals and the constraints under which the agent needs to operate. Providing the ability to specify the team's shared intent or operating criteria enables an AI agent to perform its primary function while still satisfying the specific wishes of the current team. While significant work has been done on instructing agents to perform tasks through language or demonstration, prior work lacks a focus on building agents that can operate within the parameters specified by a team. Worse yet, there is a dearth of research on enabling humans to provide their specifications through unstructured, naturalistic language. In this paper, we propose the use of goals and constraints as a scaffold to modulate and evaluate autonomous agents. We contribute to this field by introducing a novel dataset and an associated data collection protocol that maps language descriptions to goals and constraints corresponding to specific strategies developed by human participants for the board game Risk. Leveraging state-of-the-art language models and augmentation procedures, we develop a machine learning framework that can be used to identify goals and constraints from unstructured strategy descriptions. To validate our approach, we conduct a human-subject study to establish a human baseline for our dataset. Our results show that our machine learning architecture interprets unstructured language descriptions into strategy specifications better than human raters performing the same machine translation task (F(1,272.53) = 17.025, p < 0.001).
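As a hedged illustration of mapping unstructured strategy text to goal/constraint labels, the sketch below uses an off-the-shelf zero-shot classifier; the label names are made up, and the paper instead trains its own models on the collected Risk dataset with augmentation.

```python
# Hypothetical sketch using an off-the-shelf zero-shot classifier to map a Risk
# strategy sentence to candidate goal/constraint labels; the labels are invented
# for illustration and are not the dataset's actual annotation scheme.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

description = "Take Australia early and never leave fewer than three armies on a border."
candidate_labels = [
    "goal: control a continent",
    "constraint: maintain minimum border strength",
    "goal: eliminate a specific opponent",
]
result = classifier(description, candidate_labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```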
Multilingual speech recognition has drawn significant attention as an effective way to compensate for data scarcity in low-resource languages. End-to-end (E2E) modeling is preferred over conventional hybrid systems, mainly because it does not require a lexicon. However, in limited-data scenarios, hybrid DNN-HMM systems still outperform E2E models. Furthermore, the burden of manual lexicon creation has been alleviated by publicly available trained grapheme-to-phoneme (G2P) models and IPA transliterations for many languages. In this paper, a novel approach to hybrid DNN-HMM acoustic modeling is proposed in a multilingual setting for low-resource languages. The posterior distributions of different monolingual acoustic models applied to the target-language speech signal are fused together. A separate regression neural network is trained for each source-target language pair to transform posteriors from the source acoustic model to the target language. These networks require very limited data compared to ASR training. Posterior fusion yields relative gains of 14.65% and 6.5% over the multilingual and monolingual baselines, respectively. Cross-lingual model fusion shows that comparable results can be achieved without using posteriors from language-dependent ASR systems.
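A minimal sketch of the posterior-fusion idea, assuming hypothetical phone-set sizes and randomly initialized source-to-target regression networks; a real system would train these mappers on the limited paired data mentioned above and feed the fused posteriors to an HMM decoder.

```python
# Minimal sketch (assumed, not the paper's code) of cross-lingual posterior fusion:
# small regression networks map each source language's frame posteriors into the
# target language's phone set, and the mapped posteriors are averaged per frame.
import torch
import torch.nn as nn

n_frames, src_dims, tgt_dim = 100, [40, 46], 38   # hypothetical phone-set sizes

# One source-to-target regression network per source language.
mappers = [nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, tgt_dim))
           for d in src_dims]

# Posteriors of each monolingual source acoustic model on the target-language utterance.
src_posteriors = [torch.softmax(torch.randn(n_frames, d), dim=-1) for d in src_dims]

mapped = [torch.softmax(m(p), dim=-1) for m, p in zip(mappers, src_posteriors)]
fused = torch.stack(mapped).mean(dim=0)            # frame-level fusion for the HMM decoder
print(fused.shape)                                  # torch.Size([100, 38])
```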
Learning from one's mistakes is an effective human learning technique in which learners pay more attention to the topics where they made errors in order to deepen their understanding. In this paper, we investigate whether this human learning strategy can be applied to machine learning. We propose a novel machine learning method called Learning From Mistakes (LFM), in which a learner improves its ability to learn by focusing more on mistakes during revision. We formulate LFM as a three-stage optimization problem: 1) the learner learns; 2) the learner re-learns, focusing on mistakes; and 3) the learner validates its learning. We develop an efficient algorithm to solve the LFM problem. We apply the LFM framework to neural architecture search on CIFAR-10, CIFAR-100, and ImageNet. Experimental results strongly demonstrate the effectiveness of our model.
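Below is a schematic reading of the three LFM stages on a toy classifier; the actual method formulates this as a multi-stage optimization for neural architecture search, which these few lines do not reproduce.

```python
# Schematic sketch of the three LFM stages on a toy classifier (my reading of the
# abstract, not the authors' optimization algorithm).
import torch
import torch.nn as nn

X, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
Xval, yval = torch.randn(64, 10), torch.randint(0, 2, (64,))
model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
ce = nn.CrossEntropyLoss(reduction="none")

def train(weights, steps=50):
    for _ in range(steps):
        loss = (ce(model(X), y) * weights).mean()
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 1: the learner learns (all examples weighted equally).
train(torch.ones(len(y)))

# Stage 2: the learner re-learns, putting more weight on its mistakes.
with torch.no_grad():
    mistakes = (model(X).argmax(dim=-1) != y).float()
train(1.0 + 2.0 * mistakes)    # up-weight misclassified examples

# Stage 3: the learner validates its learning on held-out data.
with torch.no_grad():
    acc = (model(Xval).argmax(dim=-1) == yval).float().mean()
print(f"validation accuracy: {acc:.2f}")
```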
Causality detection has attracted much attention in the fields of natural language processing and linguistics research. It has essential applications in information retrieval, event prediction, question answering, financial analysis, and market research. In this study, we explore several approaches to identify and extract cause-effect pairs in financial documents using transformers. To this end, we propose an approach that combines POS tagging with the BIO scheme and can be integrated with modern transformer models to address the challenge of identifying causality in a given text. Our best methodology achieves an F1 score of 0.9551 and an exact match score of 0.8777 on the blind test of the FinCausal-2021 shared task at the FinCausal-2021 workshop.
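A hedged sketch of how a BIO cause/effect scheme could be combined with POS tags as the label space for transformer token classification; the joint label design, tag inventory, and base model here are assumptions, not the authors' exact scheme.

```python
# Hedged sketch: joint BIO cause/effect + POS labels for transformer token
# classification (the label design is a guess at the abstract's description).
from itertools import product
from transformers import AutoTokenizer, AutoModelForTokenClassification

bio_tags = ["B-CAUSE", "I-CAUSE", "B-EFFECT", "I-EFFECT", "O"]
pos_tags = ["NOUN", "VERB", "ADJ", "ADP", "OTHER"]
labels = [f"{b}|{p}" for b, p in product(bio_tags, pos_tags)]   # joint BIO+POS label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

sentence = "Profits fell because supply chain costs rose sharply."
inputs = tokenizer(sentence, return_tensors="pt")
logits = model(**inputs).logits                     # (1, n_tokens, n_labels)
pred = logits.argmax(dim=-1)[0]
print([labels[i] for i in pred.tolist()])           # untrained model: labels are arbitrary
```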
Fog computing was introduced to alleviate the limitations of cloud computing by bringing cloud resources into the proximity of users. A fog environment makes its limited resources available to a large number of users for deploying their serverless applications, each composed of multiple serverless functions. The main intention behind introducing the fog environment is to satisfy the demands of latency- and location-sensitive serverless applications with its limited resources. Recent research has mostly focused on allocating the maximum possible resources to such applications from the fog nodes rather than fully utilizing the cloud environment. This has a negative impact on serving the maximum number of connected users. To address this issue, in this paper we investigate the optimal percentage of a user's requests that should be fulfilled by the fog and by the cloud. To this end, we propose DeF-DReL, a systematic deployment of serverless functions in fog and cloud environments using deep reinforcement learning, which uses several real-life parameters such as the user's distance and latency to nearby fog nodes, the user's priority, the priority of the serverless application and its resource demand, etc. From the simulation and comparison results, its superiority over recent related algorithms and its applicability to real-life scenarios can be clearly observed.
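An illustrative sketch, under assumed state features and reward shaping, of a DQN-style network scoring the two placement actions (fog vs. cloud) for a request; DeF-DReL's actual state, action, and reward definitions follow the parameters listed above and are not reproduced here.

```python
# Illustrative sketch (assumptions, not DeF-DReL itself): a tiny DQN-style scorer
# for fog vs. cloud placement, with a toy reward that prefers the fog for
# high-priority, latency-sensitive requests as long as fog capacity remains.
import random
import torch
import torch.nn as nn

qnet = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # actions: 0=fog, 1=cloud
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def reward(state, action):
    distance, latency_need, priority, fog_capacity = state.tolist()
    if action == 0:   # place on fog
        return priority * latency_need - distance if fog_capacity > 0 else -1.0
    return 0.5 - 0.1 * latency_need               # cloud: fine for latency-tolerant requests

for step in range(500):
    state = torch.rand(4)                          # distance, latency need, priority, fog capacity
    q = qnet(state)
    action = random.randrange(2) if random.random() < 0.1 else q.argmax().item()
    target = q.detach().clone()
    target[action] = reward(state, action)         # one-step (bandit-style) target
    loss = nn.functional.mse_loss(qnet(state), target)
    opt.zero_grad(); loss.backward(); opt.step()

# e.g., a high-priority, latency-sensitive request near a fog node with spare capacity
print(qnet(torch.tensor([0.1, 0.9, 0.9, 0.8])))
```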